1 - 20 of 51,403
1.
Sci Rep ; 14(1): 10371, 2024 05 06.
Article En | MEDLINE | ID: mdl-38710806

Emotion is a human sense that can influence an individual's quality of life in both positive and negative ways. The ability to distinguish different types of emotion can help researchers estimate a patient's current condition or the probability of future disease. Recognizing emotions from facial images is problematic because individuals can conceal their feelings by modifying their facial expressions. This has led researchers to consider electroencephalography (EEG) signals for more accurate emotion detection. However, the complexity of EEG recordings and of data analysis with conventional machine learning algorithms has caused inconsistent emotion recognition. Therefore, hybrid deep learning models and related techniques have become common because they can analyze complicated data and achieve higher performance by integrating diverse features of the constituent models. At the same time, researchers favor models with fewer parameters that still achieve the highest average accuracy. This study improves the Convolutional Fuzzy Neural Network (CFNN) for emotion recognition from EEG signals to achieve a reliable detection system. Initially, pre-processing and feature extraction phases are implemented to obtain noiseless and informative data. Then, the CFNN with a modified architecture is trained to classify emotions. Several parametric and comparative experiments are performed. The proposed model achieved reliable emotion recognition with average accuracies of 98.21% and 98.08% for valence (pleasantness) and arousal (intensity), respectively, and outperformed state-of-the-art methods.


Electroencephalography; Emotions; Fuzzy Logic; Neural Networks, Computer; Humans; Electroencephalography/methods; Emotions/physiology; Male; Female; Adult; Algorithms; Young Adult; Signal Processing, Computer-Assisted; Deep Learning; Facial Expression
2.
Sci Rep ; 14(1): 10871, 2024 05 13.
Article En | MEDLINE | ID: mdl-38740777

Reinforcement of Internet of Medical Things (IoMT) network security has become extremely significant, as these networks enable both patients and healthcare providers to communicate with each other by exchanging medical signals, data, and vital reports in a safe way. To ensure the safe transmission of sensitive information, robust and secure access mechanisms are paramount. Vulnerabilities in these networks, particularly at the access points, could expose patients to significant risks. Among the possible security measures, biometric authentication is becoming a more feasible choice, with a focus on leveraging regularly monitored biomedical signals such as electrocardiogram (ECG) signals due to their unique characteristics. A notable challenge within all biometric authentication systems is the risk of losing the original biometric traits if hackers successfully compromise the biometric template storage space. Current research endorses replacement of the original biometrics used in access control with cancellable templates. These are produced using encryption or non-invertible transformation, which improves security by enabling the biometric templates to be changed when unwanted access is detected. This study presents a comprehensive framework for ECG-based recognition with cancellable templates, which may be used for accessing IoMT networks. An innovative methodology is introduced through non-invertible modification of ECG signals using blind signal separation and lightweight encryption. The basic idea depends on the assumption that, if the ECG signal and an auxiliary audio signal of the same person are fed to a separation algorithm, the algorithm will yield two uncorrelated components through the minimization of a correlation cost function. Hence, the outputs of the separation algorithm will be distorted versions of the ECG and audio signals. The distorted versions of the ECG signals can be treated with a lightweight encryption stage and used as cancellable templates. Security enhancement is achieved through a lightweight encryption stage based on a user-specific pattern and an XOR operation, thereby reducing the processing burden associated with conventional encryption methods. The proposed framework's efficacy is demonstrated through its application to the ECG-ID and MIT-BIH datasets, yielding promising results. The experimental evaluation reveals an Equal Error Rate (EER) of 0.134 on the ECG-ID dataset and 0.4 on the MIT-BIH dataset, alongside an exceptionally large Area under the Receiver Operating Characteristic curve (AROC) of 99.96% for both datasets. These results underscore the framework's potential for securing IoMT networks through cancellable biometrics, offering a hybrid security model that combines the strengths of non-invertible transformations and lightweight encryption.
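A minimal Python sketch of the two ingredients described above, written as an illustration rather than the authors' actual pipeline: the blind-separation step is approximated here by a simple symmetric decorrelating (whitening) transform of the ECG/audio pair, and the lightweight encryption is a XOR with a user-specific key pattern; all names and parameters are assumptions.

```python
import numpy as np

def decorrelate_pair(ecg, audio):
    """Mix an ECG segment with a same-length auxiliary audio segment so that the
    two outputs are mutually uncorrelated (a stand-in for the blind source
    separation step that minimizes a correlation cost function)."""
    X = np.vstack([ecg - ecg.mean(), audio - audio.mean()])
    vals, vecs = np.linalg.eigh(np.cov(X))
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T   # symmetric whitening matrix
    distorted_ecg, distorted_audio = W @ X             # outputs have zero correlation
    return distorted_ecg, distorted_audio

def lightweight_encrypt(template, user_key):
    """XOR-based lightweight encryption of an 8-bit quantized template."""
    t = np.interp(template, (template.min(), template.max()), (0, 255)).astype(np.uint8)
    key = np.resize(np.frombuffer(user_key, dtype=np.uint8), t.shape)
    return np.bitwise_xor(t, key)   # re-applying XOR with the same key inverts it

rng = np.random.default_rng(0)
ecg, audio = rng.standard_normal(2000), rng.standard_normal(2000)  # placeholder signals
distorted_ecg, _ = decorrelate_pair(ecg, audio)
cancellable_template = lightweight_encrypt(distorted_ecg, b"user-specific-pattern")
```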


Computer Security; Electrocardiography; Internet of Things; Electrocardiography/methods; Humans; Algorithms; Signal Processing, Computer-Assisted; Biometric Identification/methods
3.
BMC Med Inform Decis Mak ; 24(1): 119, 2024 May 06.
Article En | MEDLINE | ID: mdl-38711099

The goal is to enhance an automated sleep staging system's performance by leveraging the diverse signals captured through multi-modal polysomnography (PSG) recordings. Three modalities of PSG signals, namely the electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG), were considered to obtain the optimal fusion of PSG signals, from which 63 features were extracted. These include frequency-based, time-based, statistical, entropy-based, and non-linear features. We adopted the ReliefF (ReF) feature selection algorithm to find suitable features for each individual signal and for the fused PSG signals. The twelve features most correlated with the sleep stages were selected from the extracted feature set. The selected features were fed into an AdaBoost with Random Forest (ADB + RF) classifier to validate the chosen subsets and classify the sleep stages. The experiments were carried out under two testing schemes: epoch-wise testing and subject-wise testing. The study was conducted on four publicly available datasets: ISRUC-Sleep subgroup 1 (ISRUC-SG1), Sleep-EDF (S-EDF), the PhysioBank CAP sleep database (PB-CAPSDB), and S-EDF-78. This work demonstrates that the proposed fusion strategy outperforms the conventional use of individual PSG signals.
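A compact sketch of the selection-plus-classification stages described above, assuming feature vectors have already been extracted per 30-s epoch; it uses the skrebate implementation of ReliefF and scikit-learn's AdaBoost with a Random Forest base learner (parameter names may differ slightly across library versions).

```python
import numpy as np
from skrebate import ReliefF                      # pip install skrebate
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 63))                # 63 features per epoch (placeholder data)
y = rng.integers(0, 5, size=500)                  # 5 sleep stages (placeholder labels)

# ReliefF ranks features by how well they separate neighbouring epochs of different stages.
relief = ReliefF(n_features_to_select=12, n_neighbors=20)
relief.fit(X, y)
top12 = np.argsort(relief.feature_importances_)[::-1][:12]

# AdaBoost boosting a Random Forest base estimator (ADB + RF).
adb_rf = AdaBoostClassifier(estimator=RandomForestClassifier(n_estimators=50),
                            n_estimators=10, random_state=0)
print(cross_val_score(adb_rf, X[:, top12], y, cv=5).mean())   # epoch-wise CV accuracy
```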


Electroencephalography; Electromyography; Electrooculography; Machine Learning; Polysomnography; Sleep Stages; Humans; Sleep Stages/physiology; Adult; Male; Female; Signal Processing, Computer-Assisted
4.
PLoS One ; 19(5): e0302707, 2024.
Article En | MEDLINE | ID: mdl-38713653

Knee osteoarthritis (OA) is a prevalent, debilitating joint condition primarily affecting the elderly. This investigation aims to develop an electromyography (EMG)-based method for diagnosing knee pathologies. EMG signals of the muscles surrounding the knee joint were examined and recorded. The principal components of the proposed method were preprocessing, higher-order spectral analysis (HOSA), and diagnosis/recognition through deep learning. EMG signals from individuals with normal and OA knees recorded while walking were extracted from a publicly available database. This examination focused on the quadriceps femoris, the medial gastrocnemius, the rectus femoris, the semitendinosus, and the vastus medialis. Filtration and rectification were applied beforehand to remove noise and smooth the EMG signals. The signals' higher-order spectra were analyzed with HOSA to obtain information about nonlinear interactions and phase coupling. First, the bicoherence representation of the EMG signals was computed. The resulting images were fed into a deep learning system for identification and analysis. A deep learning algorithm using an adapted ResNet101 CNN model examined the images to determine whether the EMG signals were normal or indicative of knee osteoarthritis. The validated test results demonstrated high accuracy and robust metrics, indicating that the proposed method is effective. The medial gastrocnemius (MG) muscle distinguished knee osteoarthritis (KOA) patients from normal subjects with 96.3±1.7% accuracy and an AUC of 0.994±0.008. MG yielded the highest prediction accuracy for KOA and can be used as the muscle of interest in future analyses. Despite the proposed method's strengths, some limitations still require special consideration and will be addressed in future research.
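For readers unfamiliar with HOSA, the sketch below shows one conventional direct (FFT-based) estimator of the bicoherence magnitude that could produce the kind of image fed to the CNN; segment length, windowing, and normalization are assumptions, not the paper's exact settings.

```python
import numpy as np

def bicoherence(x, nperseg=256):
    """Direct FFT-based bicoherence estimate of a 1-D signal.
    Returns an (nperseg//2, nperseg//2) map with values in [0, 1]."""
    x = np.asarray(x, dtype=float)
    nseg = len(x) // nperseg
    segs = x[:nseg * nperseg].reshape(nseg, nperseg)
    segs = (segs - segs.mean(axis=1, keepdims=True)) * np.hanning(nperseg)
    X = np.fft.fft(segs, axis=1)
    half = nperseg // 2
    f = np.arange(half)
    idx_sum = (f[:, None] + f[None, :]) % nperseg          # frequency index f1 + f2
    num = np.zeros((half, half), dtype=complex)            # accumulates X(f1) X(f2) X*(f1+f2)
    den1 = np.zeros((half, half))
    den2 = np.zeros((half, half))
    for Xk in X:
        prod = Xk[f][:, None] * Xk[f][None, :]
        num += prod * np.conj(Xk[idx_sum])
        den1 += np.abs(prod) ** 2
        den2 += np.abs(Xk[idx_sum]) ** 2
    return np.abs(num) / np.sqrt(den1 * den2 + 1e-12)

emg = np.random.default_rng(0).standard_normal(10 * 1024)  # placeholder EMG trace
img = bicoherence(emg)                                      # 2-D image for the CNN
```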


Deep Learning; Electromyography; Knee Joint; Osteoarthritis, Knee; Humans; Electromyography/methods; Osteoarthritis, Knee/diagnosis; Osteoarthritis, Knee/physiopathology; Knee Joint/physiopathology; Male; Female; Muscle, Skeletal/physiopathology; Middle Aged; Signal Processing, Computer-Assisted; Algorithms; Adult; Aged
5.
Chaos ; 34(5)2024 May 01.
Article En | MEDLINE | ID: mdl-38717398

We use a multiscale symbolic approach to study the complex dynamics of temporal lobe refractory epilepsy employing high-resolution intracranial electroencephalogram (iEEG) recordings. We consider the basal and preictal phases and meticulously analyze the dynamics across frequency bands, focusing on high-frequency oscillations up to 240 Hz. Our results reveal significant periodicities and critical time scales within neural dynamics across frequency bands. By bandpass filtering neural signals into delta, theta, alpha, beta, gamma, and ripple high-frequency oscillation (HFO) bands, each associated with specific neural processes, we examine the distinct nonlinear dynamics. Our method introduces a reliable approach to pinpoint intrinsic time lag scales τ within frequency bands of the basal and preictal signals, which are crucial for the study of refractory epilepsy. Using metrics such as permutation entropy (H), Fisher information (F), and complexity (C), we explore nonlinear patterns within iEEG signals. We reveal the intrinsic lag τmax that maximizes complexity within each frequency band, unveiling the subtle nonlinear patterns of the temporal structures within the basal and preictal signals. Examining the H×F and C×F values allows us to identify differences in the delta band and in a band between 200 and 220 Hz (HFO 6) when comparing basal and preictal signals. Differences in Fisher information in the delta and HFO 6 bands before seizures highlight their role in capturing important system dynamics. This offers new perspectives on the intricate relationship between delta oscillations and HFO waves in patients with focal epilepsy, highlighting the importance of these patterns and their potential as biomarkers.
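The symbolic quantifiers mentioned above are computed from the ordinal-pattern (Bandt-Pompe) distribution of a band-filtered signal; the sketch below is a generic implementation with illustrative order, lag, and normalization choices, not the authors' exact parameterization.

```python
import numpy as np
from itertools import permutations

def ordinal_distribution(x, order=4, tau=1):
    """Probability of each ordinal (Bandt-Pompe) pattern of length `order` at lag `tau`."""
    index = {p: i for i, p in enumerate(permutations(range(order)))}
    counts = np.zeros(len(index))
    for i in range(len(x) - (order - 1) * tau):
        window = x[i:i + order * tau:tau]
        counts[index[tuple(np.argsort(window))]] += 1
    return counts / counts.sum()

def permutation_entropy(p):
    """Normalized Shannon entropy H of the ordinal distribution."""
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum() / np.log(len(p)))

def fisher_information(p):
    """Discrete Fisher information measure F (normalization constant 1/2 assumed)."""
    return float(0.5 * np.sum((np.sqrt(p[1:]) - np.sqrt(p[:-1])) ** 2))

x = np.random.default_rng(0).standard_normal(5000)   # placeholder band-filtered signal
p = ordinal_distribution(x, order=4, tau=5)           # tau plays the role of the time lag scale
print(permutation_entropy(p), fisher_information(p))
```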


Biomarkers; Delta Rhythm; Humans; Biomarkers/metabolism; Delta Rhythm/physiology; Electroencephalography/methods; Epilepsy/physiopathology; Signal Processing, Computer-Assisted; Male; Nonlinear Dynamics; Female; Adult; Epilepsy, Temporal Lobe/physiopathology
6.
Biomed Eng Online ; 23(1): 45, 2024 May 05.
Article En | MEDLINE | ID: mdl-38705982

BACKGROUND: Sleep-disordered breathing (SDB) affects a significant portion of the population. As such, there is a need for accessible and affordable assessment methods for diagnosis, case-finding, and long-term follow-up. Research has focused on exploiting cardiac and respiratory signals to extract proxy measures for sleep combined with SDB event detection. We introduce a novel multi-task model combining cardiac activity and respiratory effort to perform sleep-wake classification and SDB event detection in order to automatically estimate the apnea-hypopnea index (AHI) as a severity indicator. METHODS: The proposed multi-task model utilized both convolutional and recurrent neural networks and was formed by a shared part for common feature extraction, a task-specific part for sleep-wake classification, and a task-specific part for SDB event detection. The model was trained with RR intervals derived from the electrocardiogram and respiratory effort signals. To assess performance, overnight polysomnography (PSG) recordings from 198 patients with varying degrees of SDB were included, with manually annotated sleep stages and SDB events. RESULTS: We achieved a Cohen's kappa of 0.70 in the sleep-wake classification task, corresponding to a Spearman's correlation coefficient (R) of 0.830 between the estimated total sleep time (TST) and the TST obtained from PSG-based sleep scoring. Combining the sleep-wake classification and SDB detection results of the multi-task model, we obtained an R of 0.891 between the estimated and the reference AHI. For severity classification of SDB groups based on AHI, a Cohen's kappa of 0.58 was achieved. The multi-task model performed better than a single-task model proposed in a previous study for AHI estimation, in particular for patients with lower sleep efficiency (R of 0.861 with the multi-task model versus R of 0.746 with the single-task model for subjects with sleep efficiency < 60%). CONCLUSION: Assisted by automatic sleep-wake classification, our multi-task model demonstrated proficiency in estimating the AHI and assessing SDB severity in a fully automatic manner using RR intervals and respiratory effort. This shows the potential for improving SDB screening with unobtrusive sensors, including for subjects with low sleep efficiency, without adding sensors dedicated to sleep-wake detection.
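As a concrete illustration of the final step, turning per-epoch model outputs into an AHI, the arithmetic is simply the number of detected SDB events divided by the estimated total sleep time in hours; the sketch below assumes 30-s epochs and binary per-epoch predictions, which are simplifications of the actual model outputs.

```python
import numpy as np

def estimate_ahi(sleep_pred, event_pred, epoch_sec=30):
    """AHI = detected SDB events per hour of estimated sleep.
    sleep_pred: 1 = asleep, 0 = awake, per epoch.
    event_pred: 1 = epoch belongs to an SDB event, 0 = otherwise."""
    sleep_pred = np.asarray(sleep_pred)
    event_pred = np.asarray(event_pred)
    tst_hours = sleep_pred.sum() * epoch_sec / 3600.0          # estimated total sleep time
    # Count each contiguous run of event epochs as one apnea/hypopnea event.
    n_events = int(np.sum(np.diff(np.r_[0, event_pred]) == 1))
    return n_events / tst_hours if tst_hours > 0 else float("nan")

# Example: 8-h recording (960 epochs), ~85% estimated sleep, 60 isolated events.
rng = np.random.default_rng(1)
sleep = (rng.random(960) > 0.15).astype(int)
events = np.zeros(960, dtype=int)
events[np.arange(0, 960, 16)] = 1          # 60 single-epoch events, none adjacent
print(round(estimate_ahi(sleep, events), 1))
```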


Respiration; Signal Processing, Computer-Assisted; Sleep Apnea Syndromes; Sleep Apnea Syndromes/physiopathology; Sleep Apnea Syndromes/diagnosis; Humans; Male; Middle Aged; Polysomnography; Female; Machine Learning; Adult; Neural Networks, Computer; Electrocardiography; Aged; Wakefulness/physiology; Sleep
7.
Sci Rep ; 14(1): 10792, 2024 05 11.
Article En | MEDLINE | ID: mdl-38734752

Epilepsy is a chronic neurological disease characterized by spontaneous, unprovoked, recurrent seizures that may lead to long-term disability and premature death. Despite significant efforts made to improve epilepsy detection clinically and pre-clinically, the pervasive presence of noise in EEG signals continues to pose substantial challenges to effective application. In addition, discriminant features for epilepsy detection have not yet been investigated. The objective of this study is to develop a hybrid model for epilepsy detection from noisy and fragmented EEG signals. We hypothesized that a hybrid model could surpass existing single models in epilepsy detection. Our approach involves manual noise rejection and a novel statistical channel selection technique to detect epilepsy even from noisy EEG signals. Our proposed Base-2-Meta stacking classifier achieved notable accuracy (0.98 ± 0.05), precision (0.98 ± 0.07), recall (0.98 ± 0.05), and F1 score (0.98 ± 0.04) even with noisy 5-s segmented EEG signals. Applying our approach to the specific problem of detecting epilepsy from noisy and fragmented EEG data yields performance that is not only superior to existing methods but also translationally relevant, highlighting its potential application in clinical settings, where EEG signals are often noisy or scarce. Our proposed metric, DF-A (Discriminant Feature-Accuracy), identified for the first time the most discriminant feature among models that achieve an accuracy of A or above (A = 95% in this study). This approach allows discriminant features to be detected and used as potential electrographic biomarkers in epilepsy detection research. Moreover, our study introduces innovative insights into the understanding of these features, epilepsy detection, and cross-validation, markedly improving epilepsy detection in ways previously unavailable.
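The "Base-2-Meta" stacking idea, base classifiers whose out-of-fold predictions feed a meta-learner, can be prototyped directly with scikit-learn; the particular base and meta estimators below are placeholders, since the abstract does not list the exact members of the ensemble.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 30))        # features from 5-s EEG segments (placeholder)
y = rng.integers(0, 2, size=400)          # epilepsy vs. control (placeholder labels)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),   # meta-learner on stacked outputs
    cv=5, stack_method="predict_proba")
print(cross_val_score(stack, X, y, cv=5, scoring="f1").mean())
```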


Electroencephalography; Epilepsy; Electroencephalography/methods; Humans; Epilepsy/diagnosis; Epilepsy/physiopathology; Signal Processing, Computer-Assisted; Algorithms; Signal-To-Noise Ratio
8.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article En | MEDLINE | ID: mdl-38732798

Photoplethysmography (PPG) is a non-invasive method used for cardiovascular monitoring, with multi-wavelength PPG (MW-PPG) enhancing its efficacy by using multiple wavelengths for improved assessment. This study explores how contact force (CF) variations impact MW-PPG signals. Data from 11 healthy subjects are analyzed to investigate the still understudied specific effects of CF on PPG signals. The obtained dataset includes simultaneous recordings of five PPG wavelengths (470, 525, 590, 631, and 940 nm), CF, skin temperature, and the tonometric measurement derived from CF. The evolution of the raw signals and of the PPG DC and AC components is analyzed in relation to the increasing and decreasing phases of the CF. The findings reveal individual variability in signal responses related to skin and vasculature properties and demonstrate hysteresis and wavelength-dependent responses to CF changes. Notably, all wavelengths except 631 nm showed that the DC component of the PPG signals correlates with CF trends, suggesting the potential use of this component as an indirect CF indicator. However, further validation is needed for practical application. The study underscores the importance of the biomechanical properties of the measurement site and of inter-individual variability, and proposes the arterial pressure wave as a key factor in PPG signal formation.
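One simple way to obtain the DC and AC components analyzed here is to low-pass filter the raw PPG to get the slowly varying DC level and take the residual as the pulsatile AC part; the cutoff and sampling values below are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def split_dc_ac(ppg, fs, dc_cutoff_hz=0.5):
    """Split a raw PPG trace into its slowly varying DC level and pulsatile AC part."""
    b, a = butter(2, dc_cutoff_hz / (fs / 2), btype="low")
    dc = filtfilt(b, a, ppg)          # baseline that tracks the contact-force trend
    ac = ppg - dc                     # cardiac pulsation riding on the baseline
    return dc, ac

fs = 100                              # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
ppg = 2.0 + 0.3 * t / 30 + 0.05 * np.sin(2 * np.pi * 1.2 * t)   # synthetic drifting PPG
dc, ac = split_dc_ac(ppg, fs)
print(dc.mean(), np.ptp(ac))          # DC level and AC peak-to-peak amplitude
```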


Photoplethysmography; Humans; Photoplethysmography/methods; Male; Adult; Female; Signal Processing, Computer-Assisted; Skin Temperature/physiology; Young Adult
9.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article En | MEDLINE | ID: mdl-38732808

Currently, surface EMG signals have a wide range of applications in human-computer interaction systems. However, selecting features for gesture recognition models based on traditional machine learning can be challenging and may not yield satisfactory results. Considering the strong nonlinear generalization ability of neural networks, this paper proposes a two-stream residual network model with an attention mechanism for gesture recognition. One branch processes surface EMG signals, while the other processes hand acceleration signals. Segmented networks are utilized to fully extract the physiological and kinematic features of the hand. To enhance the model's capacity to learn crucial information, we introduce an attention mechanism after global average pooling. This mechanism strengthens relevant features and weakens irrelevant ones. Finally, the deep features obtained from the two branches of learning are fused to further improve the accuracy of multi-gesture recognition. The experiments conducted on the NinaPro DB2 public dataset resulted in a recognition accuracy of 88.25% for 49 gestures. This demonstrates that our network model can effectively capture gesture features, enhancing accuracy and robustness across various gestures. This approach to multi-source information fusion is expected to provide more accurate and real-time commands for exoskeleton robots and myoelectric prosthetic control systems, thereby enhancing the user experience and the naturalness of robot operation.
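The attention step after global average pooling is, in spirit, a squeeze-and-excitation style channel gate; the PyTorch sketch below shows such a gate on a 1-D feature map and is an interpretation of the abstract, not the authors' released code.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Re-weights feature channels using statistics from global average pooling."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, channels, time)
        w = self.gate(x.mean(dim=-1))                      # squeeze: global average pool
        return x * w.unsqueeze(-1)                         # excite: strengthen/weaken channels

# Fuse the two streams (sEMG branch + acceleration branch) after attention.
emg_feat = torch.randn(8, 64, 50)     # placeholder branch outputs
acc_feat = torch.randn(8, 64, 50)
attn = ChannelAttention(64)
fused = torch.cat([attn(emg_feat), attn(acc_feat)], dim=1)   # (8, 128, 50)
```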


Electromyography; Gestures; Neural Networks, Computer; Humans; Electromyography/methods; Signal Processing, Computer-Assisted; Pattern Recognition, Automated/methods; Acceleration; Algorithms; Hand/physiology; Machine Learning; Biomechanical Phenomena/physiology
10.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article En | MEDLINE | ID: mdl-38732827

Arterial blood pressure (ABP) serves as a pivotal clinical metric in cardiovascular health assessment, with the precise forecasting of continuous blood pressure playing a critical role in both preventing and treating cardiovascular diseases. This study proposes a novel continuous non-invasive blood pressure prediction model, DSRUnet, based on a deep sparse residual U-net combined with improved SE skip connections, which aims to enhance the accuracy of continuous blood pressure prediction from photoplethysmography (PPG) signals. The model first introduces a sparse residual connection approach for the contracting and expanding paths, facilitating richer information fusion and feature expansion to better capture subtle variations in the original PPG signals, thereby enhancing the network's representational capacity and predictive performance and mitigating potential degradation in network performance. Furthermore, an enhanced SE-GRU module is embedded in the skip connections to model and weight global information using an attention mechanism, capturing the temporal features of the PPG pulse signals through GRU layers to improve the quality of the transferred feature information and reduce redundant feature learning. Finally, a deep supervision mechanism is incorporated into the decoder module to guide the lower-level network to learn effective feature representations, alleviating the problem of vanishing gradients and facilitating effective training of the network. The proposed DSRUnet model was trained and tested on the publicly available UCI-BP dataset, with mean absolute errors for predicting systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean blood pressure (MBP) of 3.36 ± 6.61 mmHg, 2.35 ± 4.54 mmHg, and 2.21 ± 4.36 mmHg, respectively, meeting the standards set by the Association for the Advancement of Medical Instrumentation (AAMI) and achieving Grade A according to the British Hypertension Society (BHS) standard for SBP and DBP predictions. Ablation experiments and comparisons with other state-of-the-art methods confirmed the effectiveness of DSRUnet in blood pressure prediction, particularly for SBP, for which prediction errors are generally larger. The experimental results demonstrate that the DSRUnet model can accurately utilize PPG signals for real-time continuous blood pressure prediction and obtain high-quality, high-precision blood pressure prediction waveforms. Due to its non-invasiveness, continuity, and clinical relevance, the model may have significant implications for clinical applications in hospitals and for research on wearable devices in daily life.


Blood Pressure; Photoplethysmography; Humans; Photoplethysmography/methods; Blood Pressure/physiology; Algorithms; Signal Processing, Computer-Assisted; Neural Networks, Computer; Blood Pressure Determination/methods
11.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article En | MEDLINE | ID: mdl-38732846

Brain-computer interfaces (BCIs) allow information to be transmitted directly from the human brain to a computer, enhancing the ability of human brain activity to interact with the environment. In particular, BCI-based control systems are highly desirable because they can control equipment used by people with disabilities, such as wheelchairs and prosthetic legs. BCIs make use of electroencephalograms (EEGs) to decode the human brain's status. This paper presents an EEG-based facial gesture recognition method based on a self-organizing map (SOM). The proposed facial gesture recognition uses α, β, and θ power bands of the EEG signals as the features of the gesture. The SOM-Hebb classifier is utilized to classify the feature vectors. We utilized the proposed method to develop an online facial gesture recognition system. The facial gestures were defined by combining facial movements that are easy to detect in EEG signals. The recognition accuracy of the system was examined through experiments. The recognition accuracy of the system ranged from 76.90% to 97.57% depending on the number of gestures recognized. The lowest accuracy (76.90%) occurred when recognizing seven gestures, though this is still quite accurate when compared to other EEG-based recognition systems. The implemented online recognition system was developed using MATLAB, and the system took 5.7 s to complete the recognition flow.
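The α, β, and θ band powers used as gesture features can be computed, for example, from a Welch power spectral density estimate; the band limits and segment length below are common conventions, not necessarily the paper's settings.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_power_features(eeg, fs):
    """Return theta/alpha/beta band powers of a single-channel EEG epoch."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    df = freqs[1] - freqs[0]
    feats = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        feats[name] = psd[mask].sum() * df     # approximate integral of PSD over the band
    return feats

fs = 256
eeg = np.random.default_rng(0).standard_normal(4 * fs)   # placeholder 4-s epoch
print(band_power_features(eeg, fs))
```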


Brain-Computer Interfaces; Electroencephalography; Gestures; Humans; Electroencephalography/methods; Face/physiology; Algorithms; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Brain/physiology; Male
12.
Sensors (Basel) ; 24(9)2024 Apr 29.
Article En | MEDLINE | ID: mdl-38732933

This paper investigates a method for precise mapping of human arm movements using sEMG signals. A multi-channel approach captures the sEMG signals, which, combined with joint angles accurately calculated from an Inertial Measurement Unit, allows for action recognition and mapping through deep learning algorithms. First, signal acquisition and processing were carried out, which involved acquiring data from various movements (hand gestures, single-degree-of-freedom joint movements, and continuous joint actions) and sensor placement. Interference was then filtered out, and the signals were preprocessed using normalization and moving averages to obtain sEMG signals with clear features. Additionally, this paper constructs a hybrid network model, combining Convolutional Neural Networks and Artificial Neural Networks, and employs a multi-feature fusion algorithm to enhance the accuracy of gesture recognition. Furthermore, a nonlinear fit between sEMG signals and joint angles was established based on a backpropagation neural network, incorporating a momentum term and adaptive learning-rate adjustments. Finally, based on the gesture recognition and joint angle prediction models, prosthetic arm control experiments were conducted, achieving highly accurate arm movement prediction and execution. This paper not only validates the potential application of sEMG signals in the precise control of robotic arms but also lays a solid foundation for the development of more intuitive and responsive prostheses and assistive devices.


Algorithms; Arm; Electromyography; Movement; Neural Networks, Computer; Signal Processing, Computer-Assisted; Humans; Electromyography/methods; Arm/physiology; Movement/physiology; Gestures; Male; Adult
13.
Sensors (Basel) ; 24(9)2024 Apr 30.
Article En | MEDLINE | ID: mdl-38732969

The recent scientific literature abounds with proposals of seizure forecasting methods that exploit machine learning to automatically analyze electroencephalogram (EEG) signals. Deep learning algorithms seem to achieve particularly remarkable performance, suggesting that the implementation of clinical devices for seizure prediction might be within reach. However, most of this research evaluated the robustness of automatic forecasting methods through randomized cross-validation techniques, while clinical applications require much more stringent validation based on patient-independent testing. In this study, we show that automatic seizure forecasting can be performed, to some extent, even on independent patients who have never been seen during the training phase, thanks to the implementation of a simple calibration pipeline that can fine-tune deep learning models, even on a single epileptic event recorded from a new patient. We evaluate our calibration procedure using two datasets containing EEG signals recorded from a large cohort of epileptic subjects, demonstrating that the forecast accuracy of deep learning methods can increase on average by more than 20%, and that performance improves systematically in all independent patients. We further show that our calibration procedure works best for deep learning models, but can also be successfully applied to machine learning algorithms based on engineered signal features. Although our method still requires at least one epileptic event per patient to calibrate the forecasting model, we conclude that focusing on realistic validation methods allows different machine learning approaches for seizure prediction to be compared more reliably, enabling the implementation of robust and effective forecasting systems that can be used in daily healthcare practice.
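Such a calibration pipeline boils down to briefly fine-tuning a pretrained forecasting model on segments from a single recorded event of the new patient, typically with a small learning rate and most layers frozen; the PyTorch sketch below illustrates that idea under those assumptions and does not reproduce the authors' implementation.

```python
import torch
import torch.nn as nn

def calibrate(model: nn.Module, calib_x: torch.Tensor, calib_y: torch.Tensor,
              epochs: int = 20, lr: float = 1e-4):
    """Fine-tune only the final layer of a pretrained seizure-forecasting model
    on preictal/interictal segments from one event of a new patient."""
    for p in model.parameters():              # freeze the pretrained feature extractor
        p.requires_grad = False
    head = list(model.children())[-1]         # assume the last child is the classifier head
    for p in head.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(calib_x).squeeze(-1), calib_y)
        loss.backward()
        opt.step()
    return model

# Placeholder model and single-event calibration data (segments x features).
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
x = torch.randn(32, 128)                      # 32 segments from one labeled event
y = (torch.rand(32) > 0.5).float()            # 1 = preictal, 0 = interictal
calibrate(model, x, y)
```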


Algorithms; Deep Learning; Electroencephalography; Seizures; Humans; Electroencephalography/methods; Seizures/diagnosis; Seizures/physiopathology; Calibration; Signal Processing, Computer-Assisted; Epilepsy/diagnosis; Epilepsy/physiopathology; Machine Learning
14.
Sensors (Basel) ; 24(9)2024 May 01.
Article En | MEDLINE | ID: mdl-38733008

Bats play a pivotal role in maintaining ecological balance, and studying their behaviors offers vital insights into environmental health and aids in conservation efforts. Determining the presence of various bat species in an environment is essential for many bat studies. Specialized audio sensors can be used to record bat echolocation calls that can then be used to identify bat species. However, the complexity of bat calls presents a significant challenge, necessitating expert analysis and extensive time for accurate interpretation. Recent advances in neural networks can help identify bat species automatically from their echolocation calls. Such neural networks can be integrated into a complete end-to-end system that leverages recent internet of things (IoT) technologies with long-range, low-powered communication protocols to implement automated acoustical monitoring. This paper presents the design and implementation of such a system that uses a tiny neural network for interpreting sensor data derived from bat echolocation signals. A highly compact convolutional neural network (CNN) model was developed that demonstrated excellent performance in bat species identification, achieving an F1-score of 0.9578 and an accuracy rate of 97.5%. The neural network was deployed, and its performance was evaluated on various alternative edge devices, including the NVIDIA Jetson Nano and Google Coral.


Chiroptera; Echolocation; Neural Networks, Computer; Chiroptera/physiology; Chiroptera/classification; Animals; Echolocation/physiology; Acoustics; Signal Processing, Computer-Assisted; Vocalization, Animal/physiology
15.
Sensors (Basel) ; 24(9)2024 May 02.
Article En | MEDLINE | ID: mdl-38733015

Modern society increasingly recognizes brain fatigue as a critical factor affecting human health and productivity. This study introduces a novel, portable, cost-effective, and user-friendly system for real-time collection, monitoring, and analysis of physiological signals aimed at enhancing the precision and efficiency of brain fatigue recognition and broadening its application scope. Utilizing raw physiological data, this study constructed a compact dataset that incorporated EEG and ECG data from 20 subjects to index fatigue characteristics. By employing a Bayesian-optimized multi-granularity cascade forest (Bayes-gcForest) for fatigue state recognition, this study achieved recognition rates of 95.71% and 96.13% on the DROZY public dataset and constructed dataset, respectively. These results highlight the effectiveness of the multi-modal feature fusion model in brain fatigue recognition, providing a viable solution for cost-effective and efficient fatigue monitoring. Furthermore, this approach offers theoretical support for designing rest systems for researchers.


Bayes Theorem; Electroencephalography; Humans; Electroencephalography/methods; Fatigue/physiopathology; Fatigue/diagnosis; Electrocardiography/methods; Brain/physiology; Algorithms; Adult; Male; Female; Signal Processing, Computer-Assisted; Young Adult
16.
Sensors (Basel) ; 24(9)2024 May 06.
Article En | MEDLINE | ID: mdl-38733053

The fetal electrocardiogram (FECG) records changes in the fetal cardiac action potential during conduction, reflecting the developmental status of the fetus in utero and its physiological cardiac activity. Morphological alterations in the FECG can indicate intrauterine hypoxia, fetal distress, and neonatal asphyxia early on, enhancing maternal and fetal safety through prompt clinical intervention and thereby reducing neonatal morbidity and mortality. To reconstruct FECG signals with clear morphological information, this paper proposes a novel deep learning model, CBLS-CycleGAN. The model's generator combines spatial features extracted by a CNN with temporal features extracted by a BiLSTM network, thus ensuring that the reconstructed signals possess combined features with spatial and temporal dependencies. The model's discriminator utilizes PatchGAN, employing small segments of the signal as discriminative inputs to concentrate the training process on capturing signal details. Evaluating the model on two real FECG signal databases, namely the "Abdominal and Direct Fetal ECG Database" and "Fetal Electrocardiograms, Direct and Abdominal with Reference Heartbeat Annotations", resulted in mean MSE and MAE values of 0.019 and 0.006, respectively. The model detects the fetal QRS (FQRS) complex with a sensitivity, positive predictive value, and F1 score of 99.51%, 99.57%, and 99.54%, respectively. The proposed model effectively preserves the morphological information of FECG signals, capturing not only the FQRS complex but also the fetal P-wave, T-wave, P-R interval, and ST-segment information, providing clinicians with crucial diagnostic insights and a scientific foundation for developing rational treatment protocols.


Electrocardiography; Neural Networks, Computer; Signal Processing, Computer-Assisted; Humans; Electrocardiography/methods; Female; Pregnancy; Deep Learning; Fetal Monitoring/methods; Algorithms; Fetus
17.
Sensors (Basel) ; 24(9)2024 May 06.
Article En | MEDLINE | ID: mdl-38733060

Deep neural networks (DNNs) are increasingly important in the medical diagnosis of electrocardiogram (ECG) signals. However, research has shown that DNNs are highly vulnerable to adversarial examples, which can be created by carefully crafted perturbations. This vulnerability can lead to potential medical accidents and poses new challenges for the application of DNNs in the medical diagnosis of ECG signals. This paper proposes a novel network, the Channel Activation Suppression with Lipschitz Constraints Net (CASLCNet), which employs a Channel-wise Activation Suppressing (CAS) strategy to dynamically adjust the contribution of different channels to the class prediction and uses a 1-Lipschitz ℓ∞-distance network as a robust classifier to reduce the impact of adversarial perturbations on the model, thereby increasing its adversarial robustness. The experimental results demonstrate that CASLCNet achieves robust accuracy (ACC_robust) scores of 91.03% and 83.01% when subjected to PGD attacks on the MIT-BIH and CPSC2018 datasets, respectively, which proves that the proposed method enhances the model's adversarial robustness while maintaining a high accuracy rate.
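For context, the PGD attack used in this robustness evaluation iteratively perturbs the input within an ℓ∞ ball of radius ε along the sign of the loss gradient; the sketch below is the standard formulation, with ε, step size, and the placeholder classifier chosen purely for illustration.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.01, alpha=0.002, steps=10):
    """Projected Gradient Descent attack inside an l-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()            # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)           # project back into the eps-ball
    return x_adv.detach()

# Placeholder 1-D CNN classifier and a batch of single-lead ECG segments.
model = torch.nn.Sequential(torch.nn.Conv1d(1, 8, 7, padding=3), torch.nn.ReLU(),
                            torch.nn.AdaptiveAvgPool1d(1), torch.nn.Flatten(),
                            torch.nn.Linear(8, 5))
x = torch.randn(4, 1, 360)          # 4 one-second segments at 360 Hz (MIT-BIH-like sizes)
y = torch.randint(0, 5, (4,))
x_adv = pgd_attack(model, x, y)
```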


Algorithms; Electrocardiography; Neural Networks, Computer; Electrocardiography/methods; Humans; Signal Processing, Computer-Assisted
18.
J Vis Exp ; (206)2024 Apr 26.
Article En | MEDLINE | ID: mdl-38738870

The interplay between the brain and the cardiovascular system is garnering increased attention for its potential to advance our understanding of human physiology and improve health outcomes. However, the multimodal analysis of these signals is challenging due to the lack of guidelines, standardized signal processing and statistical tools, graphical user interfaces (GUIs), and automation for processing large datasets or increasing reproducibility. A further void exists in standardized EEG and heart-rate variability (HRV) feature extraction methods, undermining clinical diagnostics and the robustness of machine learning (ML) models. In response to these limitations, we introduce the BrainBeats toolbox. Implemented as an open-source EEGLAB plugin, BrainBeats integrates three main protocols: 1) heartbeat-evoked potentials (HEP) and oscillations (HEO) for assessing time-locked brain-heart interplay with millisecond accuracy; 2) EEG and HRV feature extraction for examining associations/differences between various brain and heart metrics or for building robust feature-based ML models; 3) automated extraction of heart artifacts from EEG signals to remove any potential cardiovascular contamination during EEG analysis. We provide a step-by-step tutorial for applying these three methods to an open-source dataset containing simultaneous 64-channel EEG, ECG, and PPG signals. Users can easily fine-tune parameters to tailor their unique research needs using the GUI or the command line. BrainBeats should make brain-heart interplay research more accessible and reproducible.


Electroencephalography; Heart Rate; Humans; Electroencephalography/methods; Heart Rate/physiology; Signal Processing, Computer-Assisted; Software; Brain/physiology; Machine Learning
19.
Comput Biol Med ; 175: 108510, 2024 Jun.
Article En | MEDLINE | ID: mdl-38691913

BACKGROUND: Seizure prediction algorithms have demonstrated their potential in mitigating epilepsy risks by detecting the pre-ictal state from ongoing electroencephalogram (EEG) signals. However, most of them require high-density EEG, which is burdensome to patients for daily monitoring. Moreover, prevailing seizure models require extensive training with significant amounts of labeled data, which is very time-consuming and demanding for epileptologists. METHOD: To address these challenges, we propose an adaptive channel selection strategy and a semi-supervised deep learning model to reduce the number of EEG channels and to limit the amount of labeled data required for accurate seizure prediction. Our channel selection module is centered on features from EEG power spectrum parameterization that precisely characterize epileptic activity in order to identify the seizure-associated channels for each patient. The semi-supervised model integrates generative adversarial networks and bidirectional long short-term memory networks to enhance seizure prediction. RESULTS: Our approach is evaluated on the CHB-MIT and Siena epilepsy datasets. Using only 4 channels, the method demonstrates outstanding performance, with an AUC of 93.15% on the CHB-MIT dataset and an AUC of 88.98% on the Siena dataset. Experimental results also demonstrate that our selection approach reduces the model parameters and training time. CONCLUSIONS: Adaptive channel selection coupled with semi-supervised learning can offer a possible basis for a lightweight and computationally efficient seizure prediction system, making daily monitoring practical and improving patients' quality of life.


Electroencephalography; Seizures; Humans; Electroencephalography/methods; Seizures/physiopathology; Seizures/diagnosis; Signal Processing, Computer-Assisted; Deep Learning; Algorithms; Databases, Factual; Epilepsy/physiopathology; Supervised Machine Learning
20.
Comput Biol Med ; 175: 108504, 2024 Jun.
Article En | MEDLINE | ID: mdl-38701593

Convolutional neural networks (CNNs) have been widely applied in motor imagery (MI)-based brain-computer interfaces (BCIs) to decode electroencephalography (EEG) signals. However, due to the limited receptive field of the convolutional kernel, a CNN only extracts features from local regions without considering long-term dependencies for EEG decoding. Apart from long-term dependencies, multi-modal temporal information is equally important for EEG decoding because it can offer a more comprehensive understanding of the temporal dynamics of neural processes. In this paper, we propose a novel deep learning network that combines a CNN with a self-attention mechanism to encapsulate multi-modal temporal information and global dependencies. The network first extracts multi-modal temporal information from two distinct perspectives: average and variance. A shared self-attention module is then designed to capture global dependencies along these two feature dimensions. We further design a convolutional encoder to explore the relationship between average-pooled and variance-pooled features and fuse them into more discriminative features. Moreover, a data augmentation method called signal segmentation and recombination is proposed to improve the generalization capability of the proposed network. The experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and BCI Competition IV-2b (BCIC-IV-2b) datasets show that our proposed method outperforms the state-of-the-art methods and achieves a 4-class average accuracy of 85.03% on the BCIC-IV-2a dataset. The proposed method demonstrates the effectiveness of multi-modal temporal information fusion in attention-based deep learning networks and provides a new perspective for MI-EEG decoding. The code is available at https://github.com/Ma-Xinzhi/EEG-TransNet.
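The signal segmentation and recombination augmentation mentioned above is commonly implemented by cutting same-class trials into equal time segments and concatenating segments drawn from different trials while preserving their temporal order; the sketch below follows that common recipe, with sizes chosen only as examples rather than taken from the paper.

```python
import numpy as np

def segment_recombine(trials, n_segments=4, n_new=100, seed=0):
    """Generate artificial MI-EEG trials by recombining temporal segments of
    same-class trials (each segment keeps its original position in time)."""
    rng = np.random.default_rng(seed)
    n_trials, n_channels, n_samples = trials.shape
    seg_len = n_samples // n_segments
    new = np.empty((n_new, n_channels, seg_len * n_segments), dtype=trials.dtype)
    for i in range(n_new):
        donors = rng.integers(0, n_trials, size=n_segments)   # one donor trial per segment
        new[i] = np.concatenate(
            [trials[d, :, s * seg_len:(s + 1) * seg_len] for s, d in enumerate(donors)],
            axis=-1)
    return new

# Example: 40 same-class trials, 22 channels, 4-s epochs at 250 Hz (BCIC-IV-2a-like sizes).
left_hand = np.random.default_rng(1).standard_normal((40, 22, 1000)).astype(np.float32)
augmented = segment_recombine(left_hand, n_segments=4, n_new=200)
print(augmented.shape)   # (200, 22, 1000)
```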


Brain-Computer Interfaces; Electroencephalography; Neural Networks, Computer; Humans; Electroencephalography/methods; Signal Processing, Computer-Assisted; Imagination/physiology; Deep Learning
...